Constrained Portfolio Management Using Action Space Decomposition for Reinforcement Learning

Authors

Abstract

Financial portfolio managers typically face multi-period optimization tasks with constraints such as restrictions on short-selling or requirements to invest at least a particular portion of the portfolio in a specific industry sector. A common approach to tackling these problems is to use constrained Markov decision process (CMDP) methods, which may suffer from sample inefficiency, require extensive hyperparameter tuning, and lack guarantees against constraint violations. In this paper, we propose Action Space Decomposition Based Optimization (ADBO), which optimizes a more straightforward surrogate task whose actions can be mapped back to the original task. We examine our method on two real-world portfolio construction tasks. The results show that our new approach consistently outperforms state-of-the-art benchmark approaches for general CMDPs.
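The abstract only describes the idea at a high level. As a rough illustration of what "mapping surrogate actions back to the original task" can mean in a constrained portfolio setting, the sketch below maps an unconstrained, real-valued agent action to long-only portfolio weights that respect a minimum sector allocation. The function name, constraint choices, and rescaling scheme are illustrative assumptions, not the paper's ADBO algorithm.

    import numpy as np

    def map_to_feasible_weights(raw_action, sector_mask, min_sector_share=0.2):
        """Map an unconstrained agent action to feasible portfolio weights.

        Hypothetical illustration only: produces long-only weights (no
        short-selling) that allocate at least `min_sector_share` of the
        portfolio to the assets flagged in `sector_mask`. This is not the
        paper's ADBO mapping, just a sketch of acting in a surrogate space
        and mapping the action back to the constrained task.
        """
        raw_action = np.asarray(raw_action, dtype=float)
        sector_mask = np.asarray(sector_mask, dtype=bool)

        # Softmax: any real-valued action becomes non-negative weights that
        # sum to one, which already rules out short positions.
        exp = np.exp(raw_action - raw_action.max())
        weights = exp / exp.sum()

        # If the designated sector is under-allocated, rescale the two groups
        # of weights so the sector receives exactly min_sector_share.
        sector_total = weights[sector_mask].sum()
        if sector_total < min_sector_share:
            weights[sector_mask] *= min_sector_share / max(sector_total, 1e-12)
            other_total = weights[~sector_mask].sum()
            weights[~sector_mask] *= (1.0 - min_sector_share) / max(other_total, 1e-12)
        return weights

    # Example: four assets, the first two belong to the constrained sector.
    print(map_to_feasible_weights([0.1, -0.3, 1.2, 0.5], [True, True, False, False]))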


Similar Resources

Lyapunov-Constrained Action Sets for Reinforcement Learning

Lyapunov analysis is a standard approach to studying the stability of dynamical systems and to designing controllers. We propose to design the actions of a reinforcement learning (RL) agent to be descending on a Lyapunov function. For minimum cost-to-target problems, this has the theoretical benefit of guaranteeing that the agent will reach a goal state on every trial, regardless of the RL algo...


Independent Factor Reinforcement Learning for Portfolio Management

In this paper, we propose to perform portfolio management using reinforcement learning (RL) and an independent factor model. Factors in the independent factor model are mutually independent and exhibit better predictability. RL is applied to each factor to capture temporal dependence and provide an investment suggestion for that factor. Optimal weights on the factors are found by a portfolio optimization method subject to...


Deep Reinforcement Learning in Parameterized Action Space

Recent work has shown that deep neural networks are capable of approximating both value functions and policies in reinforcement learning domains featuring continuous state and action spaces. However, to the best of our knowledge no previous work has succeeded at using deep neural networks in structured (parameterized) continuous action spaces. To fill this gap, this paper focuses on learning wi...


Couple Particles in Action Space for Reinforcement Learning

In this paper, we propose a novel action-search particle-filtering algorithm for learning processes. This algorithm is designed to perform search domain reduction and heuristic space segmentation. In this method, each action space is divided into two new segments using two particles. Appropriate search domain reduction can minimize learning time and enable the recognition of the evolutionary pr...


Simultaneous Learning of Action and Space Hierarchies in Reinforcement Learning

This paper presents a new method for the autonomous construction of hierarchical action and state representations in reinforcement learning, aimed at accelerating learning and extending the scope of such systems. In this approach, the agent uses information acquired while learning one task to discover subgoals for similar tasks by analyzing the learned policy using Monte Carlo sampling. The age...



Journal

Journal Title: Lecture Notes in Computer Science

Year: 2023

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-33377-4_29